July 11, 2023 · Musicology

Last month the US Copyright Office hosted hours of listening sessions online in which various players discussed hopes and concerns around copyright and generative AI.

When Ghostwriter977 released “Heart On My Sleeve,” it was our wake-up call, similar in some ways to that week back in December when we all found out about ChatGPT and generative AI in the first place. “Heart On My Sleeve” left songwriters and artists suddenly looking like dinosaurs facing an impending ice age.

Copyright itself is looking a little long in the tooth.

Can we admit that we weren’t ready for this?

Technology won’t wait around for copyright law to catch up. And I’m not so sanguine about its malleability here as I’ll explain. First, some basics:

For any favorite recording of yours, there are two copyrights involved: one for the song itself, and another for the recording of the song. To illustrate: if Sinatra covered The Beatles’ “Something,” he needed a license for the song, but not for the recording, which he didn’t use. And if Jay-Z wants to sample “Let It Be” and include that audio in a new Jay-Z track, he needs licenses for both.

For AI developers, there are copyright issues around building the tools. If a developer wants its tool to learn how to write and record a Taylor Swift song, it might train the AI on existing Taylor Swift songs, but that use would require a license. Sarah Silverman is suing right now because an AI tool was allegedly trained on her copyright-protected material.

But what about using the AI tool to sing an original song in a voice that replicates Taylor Swift or some other familiar identifiable voice? This, for the time being, seems to be completely legal. Consider the current Rick Astley situation. He’s suing Yung Gravy because Gravy wrote and recorded “Betty (Get Money)” which sounds like it contains samples from “Never Gonna Give You Up.”

The fact is Astley probably doesn’t have much of a case, because “Betty (Get Money)” only sounds like it contains samples. It actually doesn’t. Instead, it uses an impersonator singing like Rick Astley, along with re-recorded versions of the instrumental elements from “Never Gonna Give You Up” that are nearly indistinguishable from the original. Gravy obtained a license for the song itself but not for the recording. “Betty (Get Money)” certainly sounds as though it’s sampled, but Gravy recorded everything himself. So here’s the quiz: “What copyright has Yung Gravy violated?” The answer is, “none.” And indeed Astley is not suing for copyright infringement but rather for violation of his right of publicity; name-and-likeness type stuff. And while I sympathize, Astley’s is a tough row to hoe. He is probably going to get nothing. Astley’s record company also has no claim. Copyright law invites making your own soundalike recording, as long as no actual samples are taken from another’s recording.

AI that can sing like Taylor Swift is just an impersonator, and anyone who wants to write a song “in the manner of” Taylor Swift is within their rights to do so. Musicians intentionally write music in the manner of other artists all the time! Arguably EVERY time. And up until fairly recently (cough, “Blurred Lines,” cough), one could admit such an influence aloud without it being inflated into an admission of infringement as readily as it seems to be nowadays.

But again, in order to train the AI to sing like Taylor Swift in the first place, the developer might, for example, take a Taylor Swift record like “Love Story,” strip out the vocals with software designed to isolate them, create “TSwift_LoveStory_1.wav,” “_2.wav” and so forth, and let artificial intelligence absorb that collection. That use would probably be infringement. Copyright is the “right to copy,” and copying is what you’ve done. You need a license, and Taylor isn’t likely to give you one, though some artists may.

Recap:
Do you need a license to have your AI impersonate Taylor Swift? No.
Do you need a license to use Taylor Swift’s recordings to train your AI in the first place? Yes.

Returning to the Astley situation for a minute, there might be other issues to consider. In intellectual property disputes, “intent to create confusion” is a significant line in the sand, and it could be a factor here. And more clearly still: if you create the mistaken impression that you’ve got Drake singing the song in your car commercial, then you’ve definitely got legal problems headed your way. That’s right of publicity again. It’s one thing to impersonate someone, but you can’t misrepresent them as endorsing a product. That’s another line you don’t cross.

The future, it would seem, is getting here faster lately, and exponential progress is hard for most people to intuit. Eventually AI will get good enough to compose, or generate, a compelling facsimile of a piece composed by John Williams. I haven’t heard anything really good yet, but I don’t doubt it’s coming. And modern pop music’s vibe of repeated two-bar grooves and autotune-reliant vocals is already low-hanging fruit. Copyright’s endorsement of soundalike audio did not, in my estimation, foresee this kind of capability.

Similarly, I think it’s interesting that visual artists are using the workaround of taking photographs of AI-generated art and then claiming copyright in the photograph. For now at least, you don’t get to copyright the images that AI creates from your prompts. You might ask, “what if I do a whole series of prompts and iterations?” But even though you might think there’s creativity in the prompts you write, you don’t know what’s coming out the other side until you see it. The photograph, though, is your photograph!

And isn’t that just as screwy? Can’t we admit we weren’t ready for this? I think it would be a good place to start.

Written by Brian McBrearty